Help file for the LZH module (version 0.02).
———————————————————————————————————————————————————
LZH as freeware:
————————————————
This directory and its contents are FREEWARE, and all its files may be
copied freely, in part or whole, provided that they are not altered and no
charge is made for them. This means you can use any of the modules provided
in any application you choose, including commercial software, provided they
are not altered and I am credited for them.
The modules contained within this directory are version 0.02 of my LZH
module. They are similar to John Kortink's LZW modules, but they provide
significantly greater compression with similar overheads (only 12k on
decompression). The compression technique used is a variable-size
finite-window scheme with secondary dynamic Huffman compression, similar to
(but not the same as) gzip/zip/pkzip. The modules are written in 100%
lovingly handcrafted ARM code and are, relatively speaking, very fast at
both compression and decompression.
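
For interest, here is a toy sketch in C of the finite-window stage only.
This is NOT the module's actual algorithm or code (the real module adds
dynamic Huffman coding of the emitted tokens, and is written in ARM
assembler); it just illustrates the idea of finding matches in a window
over recent data:

  #include <stdio.h>
  #include <string.h>

  #define WINDOW    65536   /* 64k window, the maximum LZH uses */
  #define MIN_MATCH 3       /* shorter matches are cheaper as literals */

  static void lz_tokenise(const unsigned char *data, int len)
  {
      int pos = 0;
      while (pos < len) {
          int best_len = 0, best_dist = 0;
          int start = (pos > WINDOW) ? pos - WINDOW : 0;
          int cand, n;
          /* search the window for the longest earlier match */
          for (cand = start; cand < pos; cand++) {
              n = 0;
              while (pos + n < len && data[cand + n] == data[pos + n])
                  n++;
              if (n > best_len) {
                  best_len  = n;
                  best_dist = pos - cand;
              }
          }
          if (best_len >= MIN_MATCH) {
              printf("match   dist=%d len=%d\n", best_dist, best_len);
              pos += best_len;
          } else {
              printf("literal '%c'\n", data[pos]);
              pos++;
          }
      }
  }

  int main(void)
  {
      const char *s = "abracadabra abracadabra";
      lz_tokenise((const unsigned char *) s, (int) strlen(s));
      return 0;
  }

A real compressor would use a hash table over the window rather than this
brute-force search (the 0.02 history below mentions a changed hash key
algorithm, so LZH does something along these lines).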
There are currently two modules, LZH and LZHD. The LZH module allows both
compression and decompression of files; the LZHD module allows
decompression only.
Note on Wacky-Talky versions:
—————————————————————————————
Along with the system module versions, there are also versions of the
modules supplied in Wacky-Talky format. Wacky-Talky is a link manager which
provides a nice new module format and a huge load of lovely extras. Copies
of Wacky-Talky can be obtained from HENSA. The Wacky-Talky DLR chunk number
for LZH is &400 and for LZHD is &401, and these modules provide support for
automatic decompression of WT modules.
Compression/Decompression Speeds:
—————————————————————————————————
These tests were run on my Risc PC 600 with the screen switched off (i.e.
no screen overheads); compressed sizes are given in hexadecimal:
Data             Source code (114k)   Complex screen (384k)   Large text file (782k)
                 (&1C92F bytes)       (&600B8 bytes)          (&C3649 bytes)

Package          Write  Read  Size    Write  Read  Size       Write  Read  Size
gzip -9            440    93  &9901    1745   117  &4E98       3507   339  &3CA34
LZH 0.01           527    76  &977E    5074    66  &4D14          -     -       -
LZH 0.02           469    74  &977E    3567    64  &4D14       4672   310  &3BA17
Unfortunately it is difficult to provide an accurate means of speed
comparison. Needless to say, due to the different "tuning" of the
algorithms there are cases where one will be "better" than the other. In
fact LZH often compresses faster (where gzip's "speed for the cost of
compression" techniques don't help), and gzip sometimes decompresses
faster. Generally, though, the LZH modules compress better than gzip
because of the use of a 64k window (this is also the reason why LZH isn't
SIGNIFICANTLY faster than gzip on compression).
If you have any comments/bugs/ideas, then mail me at oramd@cs.man.ac.uk and I
will be only too happy to sort them out.
Dan.
LZH SWIs:
—————————
NOTE: In the case of the Wacky-Talky modules, all registers except R0 are
moved up one (i.e. R1 becomes R2, R2 becomes R3 etc.).
LZH_Compress (SWI &5F401):
——————————————————————————
On Entry:
  R0 = flags:
         bit 0: 0 => no effect.
                1 => R1 is a file handle.
         bit 1: 0 => no effect.
                1 => R1 is a pointer to the memory location for the
                     compressed data.
         bit 2: 0 => no effect.
                1 => use less memory for compression at the cost of speed
                     (not particularly useful in this version).
  R1 = If bits 0 & 1 of R0 are unset, pointer to filename.
  R2 = Pointer to start of memory to compress.
  R3 = Length of data (in bytes) to be compressed.
On Exit:
  If bit 1 of R0 was set on entry:
    R0 = Compressed size.
    All other registers preserved.
  Otherwise:
    All registers preserved.
This call saves the block of memory indicated by R2 and R3 to the
filename/file handle/memory pointed to by R1. The save block will be
checked for validity and an error returned if it contains addresses outside
valid memory. The compression routines require varying amounts of memory:
about 640k for all files over 64k in length, and significantly less for
shorter files. This is a bit on the heavy side, but is necessary to keep
speed within sensible boundaries. Compression, although relatively very
fast for this type of algorithm, can be INCREDIBLY slow (minutes for a few
hundred k in certain special cases), so don't think the computer has
crashed if the disc light hasn't flickered for ages. I have made no attempt
to deal with this, since the module is intended as a programmer's tool
mainly for read use, and reading does not suffer from the same trouble.
NOTE: Obviously there is no support for this SWI in the read-only
module (LZHD).
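
By way of illustration, here is a minimal sketch of calling LZH_Compress
from C. It assumes Acorn's shared C library interface (kernel.h and
_kernel_swi); the buffer contents and output filename are invented for the
example, and the system module version is assumed (for the Wacky-Talky
version the R1-R3 values would each move up one register, as noted above):

  #include <stdio.h>
  #include "kernel.h"

  #define LZH_Compress 0x5F401

  int main(void)
  {
      static char data[] = "Some example data to be compressed.";
      _kernel_swi_regs r;
      _kernel_oserror *e;

      r.r[0] = 0;                    /* flags clear: R1 is a filename */
      r.r[1] = (int) "compressed";   /* output file (invented name)   */
      r.r[2] = (int) data;           /* start of memory to compress   */
      r.r[3] = sizeof(data);         /* length of data in bytes       */

      e = _kernel_swi(LZH_Compress, &r, &r);
      if (e != NULL)
          printf("Error from LZH_Compress: %s\n", e->errmess);
      return 0;
  }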
LZH_DeCompress & LZHD_DeCompress (SWI &5F400; SWI &5F440 in LZHD):
——————————————————————————————————————————————————————————————————
On Entry:
  R0 = flags:
         bit 0: 0 => no effect.
                1 => R1 is a file handle. The current file pointer is
                     used as the start of the compressed data, and on
                     return the file pointer points to the byte after
                     the end of the LZH data.
         bit 1: 0 => no effect.
                1 => R1 is a pointer to the memory location of the
                     compressed data. This memory is validated.
         bit 2: 0 => no effect.
                1 => no decompression is performed; only the
                     uncompressed size is returned.
  R1 = If bits 0 & 1 of R0 are unset, pointer to filename.
  R2 = Address in memory to decompress the file to; this will be
       validated to the uncompressed length.
On Exit:
  R0 = Uncompressed size.
  R2 = Pointer to the byte after the source block (if memory to memory).
  All other registers preserved.
This call allows decompression of data from a file/memory to the memory
address pointed to by R2. Any memory will be validated and an error
returned if it is invalid, although the file will NOT be checked whilst
decompressing to see if it is a valid LZH file (this would greatly impair
performance). The file pointed to by R1 can be of any type. It is worth
noting that decompression only requires 12k and is very fast.
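
Again as an illustrative sketch (same assumptions as the compression
example above, with an invented filename), the usual pattern is to ask for
the uncompressed size first, claim a buffer of that size, and then
decompress into it:

  #include <stdio.h>
  #include <stdlib.h>
  #include "kernel.h"

  #define LZH_DeCompress 0x5F400

  int main(void)
  {
      _kernel_swi_regs r;
      _kernel_oserror *e;
      char *buffer;
      int size;

      /* First call: bit 2 of R0 set, so only the uncompressed size is
         returned (in R0) and no decompression is performed. */
      r.r[0] = 4;
      r.r[1] = (int) "compressed";   /* input file (invented name) */
      e = _kernel_swi(LZH_DeCompress, &r, &r);
      if (e != NULL) {
          printf("Error: %s\n", e->errmess);
          return 1;
      }
      size = r.r[0];

      buffer = malloc(size);
      if (buffer == NULL)
          return 1;

      /* Second call: flags clear, so R1 is a filename and R2 is the
         destination address for the decompressed data. */
      r.r[0] = 0;
      r.r[1] = (int) "compressed";
      r.r[2] = (int) buffer;
      e = _kernel_swi(LZH_DeCompress, &r, &r);
      if (e != NULL) {
          printf("Error: %s\n", e->errmess);
          free(buffer);
          return 1;
      }

      /* buffer now holds size bytes of decompressed data. */
      free(buffer);
      return 0;
  }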
Further notes on compression:
—————————————————————————————
These modules are designed for the compression of code/data within
applications. It may also be the case that you wish to make
auto-decompressing code/modules etc., and for this I would recommend
!Crunch by BASS (you can get it from their web page, or possibly from p.d.
libraries). This provides less compression than these LZH modules (not
always much less, but sometimes a lot less), and the decompression requires
no overheads. If you're trying to compress something, I recommend checking
this out. For interest's sake, I think it uses just a finite-window
compression technique (though it seems a bit fast for this, unless it is
using a small window).
History:
————————
0.01 : (revisions to 0.01 beta test)
- Added in extra options to allow memory to memory, file handle
to memory etc. transfers.
- Realised that I'd made a very stupid mistake with the dynamic
Huffman routines, which were claiming twice as much memory as
needed.
- Made the LZHD module claim memory only when it needed it.
- Finally included the facility to check on the uncompressed size of
a file.
0.02 :
- Made Wacky-Talky versions of the LZH modules.
- Minor change to LZH_DeCompress: if a file handle is supplied, it now
takes the current file pointer as the start of the input stream.
- LZH_DeCompress now returns the updated file pointer or the address of
the next byte in the input stream (in R2).
- Slight speed improvement (re-coded a couple of loops and changed
hash key algorithm).
- Removed the less memory option because it was useless.
- Stopped the 64k finite window being used by default. The window size
now matches the size of the file (so less memory is used), up to a
maximum of 64k.